
    Positive dependence in qualitative probabilistic networks

    Qualitative probabilistic networks (QPNs) combine the conditional independence assumptions of Bayesian networks with the qualitative properties of positive and negative dependence. They formalise various intuitive properties of positive dependence to allow inferences over a large network of variables. However, we will demonstrate in this paper that, due to an incorrect symmetry property, many inferences obtained in non-binary QPNs are not mathematically valid. We will provide examples of such incorrect inferences and briefly discuss possible resolutions. Comment: 10 pages, 3 figures
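The symmetry failure described above can be illustrated numerically. A common formalisation of "A positively influences B" is that conditioning on a larger value of A yields a first-order stochastically larger conditional distribution of B. The sketch below checks this property in both directions for a small joint distribution; the specific distribution is our own illustrative construction, not one taken from the paper.

```python
import numpy as np

def positively_influences(joint, axis):
    """Check whether the variable on `axis` positively influences the other:
    conditioning on a larger value must give a first-order stochastically
    larger conditional distribution of the other variable."""
    if axis == 1:
        joint = joint.T
    cond = joint / joint.sum(axis=1, keepdims=True)   # rows: P(other | this = i)
    surv = cond[:, ::-1].cumsum(axis=1)[:, ::-1]      # survival P(other >= t | this = i)
    return bool(np.all(np.diff(surv, axis=0) >= -1e-12))

# Hypothetical joint P(A, B): A binary (rows), B ternary (columns)
joint = 0.5 * np.array([[0.4, 0.6, 0.0],
                        [0.4, 0.0, 0.6]])

print(positively_influences(joint, axis=0))  # True:  A positively influences B
print(positively_influences(joint, axis=1))  # False: B does not positively influence A
```

Here P(B >= t | A = 1) >= P(B >= t | A = 0) for every threshold t, yet P(A = 1 | B = b) is not monotone in b (it takes the values 0.5, 0, 1), so positive influence holds in one direction only once a variable has more than two states.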

    Partial correlation based penalty functions and prior distributions for Gaussian graphical models

    Graphical models are a useful tool for encoding conditional independence relations. A common goal is to select the graphical model that best describes the conditional independence relationships between variables given observations of these variables. Under the additional Gaussian assumption, conditional independence is equivalent to zero entries in the inverse covariance matrix Θ. Thus sparse estimation of Θ in turn specifies a graphical model and the associated conditional independencies. Popular frequentist methods for this often involve placing a penalty function on Θ and maximising a penalised likelihood, whilst Bayesian methods require specification of a prior distribution on Θ. Conditional independence relations are invariant to non-zero scalar multiplication of the variables, however in this thesis we show that essentially all current penalised likelihood methods and many prior distributions are not invariant to such transformations of the variables. In fact many methods are very sensitive to rescaling of the variables which can, and often does, result in a vastly different selected graphical model. To remedy this issue we introduce new classes of penalty functions and prior distributions which are based on partial correlations. We show that such penalty functions and prior distributions lead to scale invariant estimation and posterior inference on Θ. We pay particular attention to two penalty functions in this class. The partial correlation graphical LASSO places an L1 penalty on the partial correlations whilst the spike and slab partial correlation graphical LASSO is a penalty function based on a spike and slab prior formulation. The performance of these penalty functions is compared to that of current popular penalty functions in simulated and real world settings. We also investigate spike and slab priors in general for Gaussian graphical models and point out that care must be taken when considering the positive definiteness of Θ. With this in mind we provide some theoretical results based on Wigner matrices.
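The key fact behind the abstract's argument can be verified directly: rescaling the variables, X to DX for a diagonal D, changes the entries of the precision matrix Θ but leaves the partial correlations rho_ij = -theta_ij / sqrt(theta_ii * theta_jj) unchanged. A minimal numpy sketch, with an arbitrary illustrative covariance matrix and scaling:

```python
import numpy as np

def partial_correlations(theta):
    # rho_ij = -theta_ij / sqrt(theta_ii * theta_jj); diagonal set to 1
    d = np.sqrt(np.diag(theta))
    rho = -theta / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
sigma = A @ A.T + 4 * np.eye(4)      # an illustrative covariance matrix
theta = np.linalg.inv(sigma)         # precision matrix

# Rescale the variables: X -> D X, so Sigma -> D Sigma D and Theta -> D^-1 Theta D^-1
D = np.diag([1.0, 10.0, 0.1, 5.0])
theta_rescaled = np.linalg.inv(D @ sigma @ D)

print(np.allclose(theta, theta_rescaled))   # False: precision entries change
print(np.allclose(partial_correlations(theta),
                  partial_correlations(theta_rescaled)))  # True: partial correlations do not
```

This is why a penalty or prior placed on the entries of Θ is sensitive to the measurement units of the data, while one placed on partial correlations is not.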

    Partial Correlation Graphical LASSO

    Standard likelihood penalties to learn Gaussian graphical models are based on regularising the off-diagonal entries of the precision matrix. Such methods, and their Bayesian counterparts, are not invariant to scalar multiplication of the variables, unless one standardises the observed data to unit sample variances. We show that such standardisation can have a strong effect on inference and introduce a new family of penalties based on partial correlations. We show that the latter, as well as the maximum likelihood, L_0 and logarithmic penalties, are scale invariant. We illustrate the use of one such penalty, the partial correlation graphical LASSO, which sets an L_1 penalty on partial correlations. The associated optimization problem is no longer convex, but is conditionally convex. We show via simulated examples and in two real datasets that, besides being scale invariant, there can be important gains in terms of inference. Comment: 41 pages, 7 figures
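The contrast between the two penalty families can be seen in a few lines: the standard L_1 penalty on off-diagonal precision entries changes its value when the data are rescaled, whereas the L_1 penalty on partial correlations does not. A sketch of the two penalty terms, using an arbitrary toy precision matrix and scaling of our own choosing:

```python
import numpy as np

def l1_offdiag(theta):
    # standard graphical-LASSO-style penalty: sum over i != j of |theta_ij|
    return np.abs(theta).sum() - np.trace(np.abs(theta))

def l1_partial_corr(theta):
    # partial correlation penalty: sum over i != j of |rho_ij|,
    # with rho_ij = -theta_ij / sqrt(theta_ii * theta_jj)
    d = np.sqrt(np.diag(theta))
    rho = theta / np.outer(d, d)
    return np.abs(rho).sum() - rho.shape[0]   # |rho_ii| = 1, so subtract p

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
theta = np.linalg.inv(A @ A.T + 3 * np.eye(3))     # a toy precision matrix
D_inv = np.diag([1.0, 0.01, 100.0])
theta_rescaled = D_inv @ theta @ D_inv             # effect of rescaling the data by D

print(np.isclose(l1_offdiag(theta), l1_offdiag(theta_rescaled)))            # False
print(np.isclose(l1_partial_corr(theta), l1_partial_corr(theta_rescaled)))  # True
```

Penalising a scale-free quantity is what makes the selected graph independent of the measurement units; the price noted in the abstract is that the resulting optimisation problem is only conditionally convex rather than convex.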
